52 Bayesian Logistic Regression Bias Adjustment for Data Observed without a Gold Standard: A Simulation Study of Clinical Alzheimer’s Disease
- William F Goette, Hudaisa Fatima, Jeff Schaffert, Anne R Carlew, Heidi Rossetti, Laura H Lacritz, C. Munro Cullum
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 259-260
-
- Article
-
-
Objective:
Definitive diagnosis of Alzheimer’s disease (AD) is often unavailable, so clinical diagnoses with some degree of inaccuracy are frequently used in research instead. When researchers test methods that may improve clinical accuracy, error in the initial diagnoses can penalize predictions that are closer to the true diagnoses but differ from the clinical ones. To address this challenge, the current study investigated a simple bias adjustment for logistic regression that accounts for known inaccuracy in initial diagnoses.
Participants and Methods: A Bayesian logistic regression model was developed to predict unobserved/true diagnostic status given the sensitivity and specificity of an imperfect reference. This model treats observed cases as a mixture of true positives (rate = sensitivity) and false positives (rate = 1 - specificity), while observed controls are a mixture of true negatives (rate = specificity) and false negatives (rate = 1 - sensitivity). This bias adjustment was tested using Monte Carlo simulations over four conditions that varied the accuracy of clinical diagnoses. Each condition comprised 1000 iterations, each generating a random dataset of n = 1000 from a true logistic model with an intercept and three arbitrary predictors. Coefficients were randomly selected in each iteration and used to produce two sets of diagnoses: true diagnoses and observed diagnoses with imperfect accuracy. The sensitivity and specificity of the simulated clinical diagnosis varied across the four conditions (C): C1 = (0.77, 0.60), C2 = (0.87, 0.44), C3 = (0.71, 0.71), and C4 = (0.83, 0.55), values derived from published accuracies of clinical AD diagnosis against autopsy-confirmed pathology. Unadjusted and bias-adjusted logistic regressions were then fit to the simulated data to assess accuracy in estimating regression parameters and in predicting true diagnosis.
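The mixture described above implies a simple adjusted likelihood: the probability of an *observed* positive label is sensitivity × P(true case) + (1 − specificity) × P(true control). A minimal sketch in Python (the function names are illustrative, not the authors' implementation, which was Bayesian rather than this plain log-likelihood):

```python
import numpy as np

def observed_prob(p_true, sens, spec):
    """Probability of an observed positive diagnosis given the true
    case probability p_true: true cases are labeled positive at
    rate = sensitivity, and true controls are mislabeled positive
    at rate = 1 - specificity."""
    return sens * p_true + (1.0 - spec) * (1.0 - p_true)

def adjusted_log_likelihood(beta, X, y_obs, sens, spec):
    """Log-likelihood of the observed (imperfect) labels under the
    bias-adjusted logistic model."""
    p_true = 1.0 / (1.0 + np.exp(-X @ beta))   # latent P(true case)
    p_obs = observed_prob(p_true, sens, spec)  # P(observed case)
    return np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))
```

Note that when sensitivity and specificity are both 1, `observed_prob` reduces to `p_true` and the expression collapses to the ordinary (unadjusted) logistic likelihood.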
Results: Under all conditions, the bias-adjusted logistic regression model outperformed its unadjusted counterpart. Root mean square error (the variability of estimated coefficients around their true parameter values) ranged from 0.23 to 0.79 for the unadjusted model versus 0.24 to 0.29 for the bias-adjusted model. The empirical coverage rate (the proportion of 95% credible intervals that included the true parameter) ranged from 0.00 to 0.47 for the unadjusted model versus 0.95 to 0.96 for the bias-adjusted model. Finally, the bias-adjusted model produced the best overall diagnostic accuracy, correctly classifying true diagnostic status about 78% of the time versus 62-72% without adjustment.
Conclusions: Results of this simulation study, which used published AD sensitivity and specificity statistics, provide evidence that bias adjustments to logistic regression models are needed when research involves diagnoses from an imperfect standard. Unadjusted methods rarely identified true effects: their 95% credible intervals covered the true coefficient values anywhere from never to less than half of the time. Additional simulations are needed to examine the bias-adjusted model’s performance under further conditions. Future research should extend the bias adjustment to multinomial logistic regression and to scenarios where the rate of misdiagnosis is unknown. Such methods may also prove valuable for improving detection of other neurological disorders with greater diagnostic error.
3 Separating Memory Impairment from Other Neuropsychological Deficits on the CVLT-II
- William F Goette, Jeff Schaffert, Anne R Carlew, David Denney, Heidi Rossetti, C. Munro Cullum, Laura H Lacritz
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, p. 678
-
- Article
-
-
Objective:
Learning curve patterns on list-learning tasks can help clinicians determine the nature of memory difficulties, as an “impaired” score may actually reflect attention and/or executive difficulties rather than a true memory impairment. Although such pattern analysis is often qualitative, quantitative methods for assessing these patterns exist but remain generally underutilized. This study aimed to develop a model that decomposes learning over repeated trials into separate cognitive processes and then incorporates other testing data to predict performance at each trial as a function of general cognitive functioning.
Participants and Methods: CVLT-II learning-trial data were obtained from patients referred for clinical evaluation to an outpatient neuropsychology service within an academic medical center. Participants with a cognitive diagnosis of non-demented (ND) or probable Alzheimer’s disease (AD) were included. The final sample consisted of 323 ND [Mage = 58.6 (14.8); Medu = 15.4 (2.7); 55.7% female] and 915 AD [Mage = 72.6 (9.0); Medu = 14.2 (3.1); 60.1% female] cases. A Bayesian non-linear beta-binomial multilevel model was used, which uses three parameters to predict CVLT-II recall by trial: verbal attention span (VAS), maximal learning potential (MLP), and learning rate (LR). Briefly, VAS predicts expected first-trial performance, MLP predicts the expected best performance as trials are repeated, and LR weights the influence of VAS versus MLP over repeated trials. Predictors of these parameters included age, education, sex, race, and clinical diagnosis, in addition to raw scores on Trail Making Test Parts A and B, phonemic (FAS) fluency, animal fluency, the Boston Naming Test, and Wisconsin Card Sorting Test (WCST) Categories Completed, as well as age-adjusted scaled scores from WAIS-IV Digit Span, Block Design, Vocabulary, and Coding. Random intercepts were included for each parameter and extracted for comparison of residual differences by diagnosis.
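The abstract does not state the exact functional form linking the three parameters to trial-level recall. One parametrization consistent with the description (the expectation equals VAS at trial 1, approaches MLP with repetition, and LR governs how quickly the weight shifts) could be sketched as follows; the formula and values are illustrative assumptions, not the authors' specification:

```python
import numpy as np

def expected_recall(trial, vas, mlp, lr):
    """Expected proportion of the list recalled at a given trial
    (1-indexed). At trial 1 this equals vas; over repeated trials it
    approaches mlp, with higher lr meaning faster learning. In the
    full model this mean would feed a beta-binomial likelihood over
    the list items, adding overdispersion beyond a plain binomial."""
    return mlp + (vas - mlp) * np.exp(-lr * (trial - 1))

# Illustrative learning curve over the five CVLT-II trials
curve = [expected_recall(t, vas=0.40, mlp=0.90, lr=0.50)
         for t in range(1, 6)]
```

A curve like this rises from first-trial span toward asymptotic best performance, which is what lets the model separate attentional (VAS), capacity (MLP), and acquisition (LR) contributions to an "impaired" total score.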
Results: The model explained 84% of the variance in CVLT-II raw scores. VAS decreased with age and time to complete Trails B but improved with both verbal fluencies and confrontation naming. MLP increased as a function of WAIS-IV Digit Span, animal fluency, confrontation naming, and WCST Categories Completed. Finally, LR was greater for females and increased with WAIS-IV Coding and Vocabulary performance but decreased with age. Participants with AD had lower estimates of all three parameters [Cohen’s d = 2.49 (VAS) to 3.48 (LR)], though including demographics and neuropsychological tests attenuated these differences [Cohen’s d = 0.34 (LR) to 0.95 (MLP)].
Conclusions: The resulting model highlights how non-memory neuropsychological deficits affect list-learning test performance. At the same time, the model demonstrated that memory patterns on the CVLT-II can still be identified beyond other confounding deficits, since having AD affected all parameters independent of other cognitive impairments. The modeling approach can generate conditional learning curves for individual patient data, and when multiple diagnoses are included in the model, a person-fit statistic can be computed to return the most likely diagnosis for an individual. The model can also be used in research to quantify or adjust for the effects of other patient data (e.g., neuroimaging, biomarkers, medications).
23 The Utility of Global versus Domain-specific Neuropsychological Test Score Dispersion as Markers of Cognitive Decline
- Hudaisa Fatima, Jeff Schaffert, Anne Carlew, Will Goette, Jessica Helphrey, Laura Lacritz, Heidi Rossetti, C. Munro Cullum
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 233-234
-
- Article
-
-
Objective:
Higher baseline dispersion (intra-individual variability) across neuropsychological test scores at a single time point has been associated with more rapid cognitive decline, onset of Mild Cognitive Impairment (MCI) and Alzheimer’s disease (AD), faster rates of hippocampal and entorhinal atrophy, and increased AD neuropathology. Comparisons between predictions made from test score dispersion within a cognitive domain versus global, cross-domain dispersion are understudied, and global dispersion may be influenced by ability- and test-specific characteristics. This study examined global versus domain-specific dispersion metrics to identify which is most predictive of cognitive decline over time.
Participants and Methods: Data from the baseline and five follow-up visits of 308 participants with normal cognition (Mage = 73.90, SD = 8.12) were selected from the National Alzheimer’s Coordinating Center (NACC) dataset. Participants were required to have no focal neurological deficits and no history of depression, stroke, or heart attack. Diagnoses and progression to MCI and/or dementia were determined at each visit through consensus conferences. Raw neuropsychological scores were standardized using NACC norms. Global baseline dispersion was defined as the intraindividual standard deviation (ISD) across the 10 scores in the NACC battery. Domain-specific dispersions were calculated by computing the ISD across the tests sampling each domain (executive functioning/attention/processing speed [EFAS], language, and memory; see Table 1 for details on these tests). Higher values on each metric reflect greater dispersion. Multinomial logistic regression model fit statistics and parameter estimates were compared across four models (global, EFAS, language, and memory dispersion), covarying for age, years of education, sex, race, ethnicity, and ApoE4 status. Models were compared using the likelihood ratio test (LRT) and the Akaike Information Criterion (AIC).
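The ISD metrics described above are straightforward to compute from standardized scores. A minimal sketch, assuming hypothetical z-scores and illustrative test names and domain groupings (not the exact composition of the NACC battery):

```python
import numpy as np

# Hypothetical z-scores for one participant on a 10-test battery;
# the test names and domain assignments below are illustrative only.
scores = {
    "trails_a": -0.3, "trails_b": -1.1, "digit_symbol": 0.2,
    "digits_forward": 0.5, "digits_backward": -0.4,        # EFAS
    "boston_naming": 0.8, "animal_fluency": -0.2,
    "vegetable_fluency": 0.1,                              # language
    "logical_memory_imm": 1.0, "logical_memory_del": 0.6,  # memory
}
EFAS = ["trails_a", "trails_b", "digit_symbol",
        "digits_forward", "digits_backward"]

def dispersion(zs):
    """Intraindividual standard deviation (ISD) across z-scores,
    using the sample (n - 1 denominator) standard deviation."""
    return float(np.std(list(zs), ddof=1))

global_isd = dispersion(scores.values())            # cross-domain
efas_isd = dispersion(scores[k] for k in EFAS)      # within-domain
```

Higher values of either index indicate a more scattered profile; the study's question is whether the global index or the within-domain indices better predict later conversion.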
Results: Of the 308 participants, 70 (22.7%) progressed to MCI and 82 (26.6%) progressed to dementia. Tables 1 and 2 show the results of the logistic regressions for the four models. All models fit the data well, with statistically significant prediction of conversion. Model 1 (global dispersion) showed a better fit than the domain-specific dispersion models per LRT and AIC values. Consistent with the mean differences between groups, parameter estimates showed that only global dispersion and EFAS dispersion significantly predicted conversion to dementia (when included with other covariates), with higher dispersion reflecting a greater risk of conversion.
Conclusions: In this sample, baseline global and EFAS dispersion measures significantly predicted conversion to dementia. Although global dispersion was the stronger predictor of dementia progression, findings suggest that executive functioning performance may be driving this relationship. A single index of global variability, calculated as the standard deviation across test scores, may serve as a supplementary tool for clinicians when identifying individuals at risk of progression to dementia. None of the models predicted conversion to MCI. Further research is needed to examine cognitive variability among patients who progress to MCI and the patient-specific factors that may relate to test score dispersion and its utility in predicting symptom progression.
Chapter 14 - Dementia
- Edited by Jacobus Donders, Scott J. Hunter, University of Chicago
-
- Book:
- Neuropsychological Conditions Across the Lifespan
- Published online:
- 27 July 2018
- Print publication:
- 16 August 2018, pp 268-285
-
- Chapter
Luria's three-step test: what is it and what does it tell us?
- Myron F. Weiner, Linda S. Hynan, Heidi Rossetti, Jed Falkowski
-
- Journal:
- International Psychogeriatrics / Volume 23 / Issue 10 / December 2011
- Published online by Cambridge University Press:
- 04 May 2011, pp. 1602-1606
-
- Article
-
Background: The purpose of this study is to determine if the three-step Luria test is useful for differentiating between cognitive disorders.
Methods: A retrospective record review of performance on the three-step Luria test was conducted for 383 participants from a university-based dementia clinic. Participants carried diagnoses of frontotemporal dementia (FTD; n = 43), Alzheimer disease (AD; n = 153), or mild cognitive impairment (MCI; n = 56), or were normal controls (NC; n = 131). Performance on the Luria test was graded as normal or abnormal.
Results: An abnormal test occurred in 2.3% of NC, 21.4% of MCI, 69.8% of FTD, and 54.9% of AD subjects. The frequency of abnormal tests in all diagnostic groups increased with functional impairment as assessed by the Clinical Dementia Rating scale (CDR). When CDR = 3 (severe), 100% of the FTD and 72.2% of the AD subjects had abnormal Luria tests.
Conclusions: The three-step Luria test distinguished NC and persons with MCI from FTD and AD, but did not distinguish FTD from AD subjects.